
    Turbo-Aggregate: Breaking the Quadratic Aggregation Barrier in Secure Federated Learning

    Federated learning is a distributed framework for training machine learning models over data residing on mobile devices while protecting the privacy of individual users. A major bottleneck in scaling federated learning to a large number of users is the overhead of secure model aggregation, which in the state-of-the-art protocols grows quadratically with the number of users. In this paper, we propose the first secure aggregation framework, named Turbo-Aggregate, that in a network with N users achieves a secure aggregation overhead of O(N log N), as opposed to O(N²), while tolerating a user dropout rate of up to 50%. Turbo-Aggregate employs a multi-group circular strategy for efficient model aggregation, and leverages additive secret sharing and novel coding techniques to inject aggregation redundancy, handling user dropouts while guaranteeing user privacy. We experimentally demonstrate that Turbo-Aggregate achieves a total running time that grows almost linearly in the number of users, and provides up to a 40× speedup over the state-of-the-art protocols with up to N = 200 users. Our experiments also demonstrate the impact of model size and bandwidth on the performance of Turbo-Aggregate.
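    The additive secret sharing mentioned in the abstract can be illustrated with a minimal sketch (not the Turbo-Aggregate protocol itself, which adds the multi-group circular structure and coding for dropouts): each user splits its update into random shares that sum to the update modulo a large number, so the server only ever sees sums and no individual update is revealed.

```python
import random

MOD = 2**31 - 1  # illustrative modulus for the share arithmetic

def share(value, n):
    """Split `value` into n additive shares that sum to it mod MOD."""
    shares = [random.randrange(MOD) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % MOD)
    return shares

# Three users, each holding a private scalar "model update"
updates = [5, 11, 23]
n = len(updates)
all_shares = [share(u, n) for u in updates]

# Party j sums the j-th share from every user; no partial sum leaks
# any single user's update, yet the partials reconstruct the total.
partials = [sum(s[j] for s in all_shares) % MOD for j in range(n)]
total = sum(partials) % MOD
assert total == sum(updates) % MOD
```

    In a real deployment the shares would be vectors (full model updates) and the exchange pattern would follow the paper's group-circular topology; the scalar version above only shows why the aggregate is recoverable while individual updates stay hidden.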

    CodedPrivateML: A Fast and Privacy-Preserving Framework for Distributed Machine Learning

    How can a machine learning model be trained while keeping the data private and secure? We present CodedPrivateML, a fast and scalable approach to this critical problem. CodedPrivateML keeps both the data and the model information-theoretically private, while allowing efficient parallelization of training across distributed workers. We characterize CodedPrivateML's privacy threshold and prove its convergence for logistic (and linear) regression. Furthermore, via experiments over Amazon EC2, we demonstrate that CodedPrivateML can provide an order-of-magnitude speedup (up to ~34×) over the state-of-the-art cryptographic approaches.
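    Information-theoretic privacy of the kind the abstract claims is typically built on polynomial (Shamir-style) secret sharing: a secret is hidden as the constant term of a random degree-T polynomial, workers receive evaluations at distinct points, and any T colluding workers learn nothing. The sketch below shows only this primitive with illustrative parameters; it is not CodedPrivateML's actual encoding, which applies Lagrange-coded techniques to full data matrices.

```python
import random

P = 2**13 - 1  # small prime field (8191), illustrative only

def shamir_share(secret, t, alphas):
    """Hide `secret` as f(0) of a random degree-t polynomial over GF(P);
    worker i receives the evaluation f(alphas[i])."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    return [sum(c * pow(a, k, P) for k, c in enumerate(coeffs)) % P
            for a in alphas]

def lagrange_at_zero(alphas, shares):
    """Recover f(0) by Lagrange interpolation from t+1 or more shares."""
    total = 0
    for i, (ai, si) in enumerate(zip(alphas, shares)):
        num, den = 1, 1
        for j, aj in enumerate(alphas):
            if j != i:
                num = num * (-aj) % P
                den = den * (ai - aj) % P
        total = (total + si * num * pow(den, P - 2, P)) % P
    return total

alphas = [1, 2, 3, 4]          # four workers, privacy threshold t = 3
shares = shamir_share(42, t=3, alphas=alphas)
recovered = lagrange_at_zero(alphas, shares)
assert recovered == 42
```

    The three-argument `pow` computes modular inverses via Fermat's little theorem; any 3 of the 4 shares here are statistically independent of the secret, which is the "privacy threshold" notion the abstract refers to.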

    Q² Dependence of the S₁₁(1535) Photocoupling and Evidence for a P-Wave Resonance in Eta Electroproduction

    New cross sections for the reaction ep → e′ηp are reported for total center-of-mass energy W = 1.5–2.3 GeV and invariant squared momentum transfer Q² = 0.13–3.3 GeV². This large kinematic range allows extraction of new information about response functions, photocouplings, and ηN coupling strengths of baryon resonances. A sharp structure is seen at W ≈ 1.7 GeV. The shape of the differential cross section is indicative of the presence of a P-wave resonance that persists to high Q². Improved values are derived for the photon coupling amplitude for the S₁₁(1535) resonance. The new data greatly expand the Q² range covered, and an interpretation of all data with a consistent parameterization is provided. (Comment: 31 pages, 9 figures)

    Lossy Coding of Correlated Sources Over a Multiple Access Channel: Necessary Conditions and Separation Results
